

A Additional results of multi-dataset training

Neural Information Processing Systems

OCHuman val and test set. The results are available in Table 11. As demonstrated in Table 12, ViTPose variants obtain better performance on both single-joint evaluation and average evaluation, e.g., ViTPose-B, ViTPose-L, and ViTPose-H achieve 93.3, 94.0, and 94.1 PCKh, respectively, where PCKh is adopted as the evaluation metric. Similarly, we evaluate the performance of ViTPose on the AI Challenger val set with the corresponding decoder head. ViTPose-G achieves the best 43.2 AP on this dataset. The dataset is released under the CC-BY-4.0 license. The MPII dataset is under the BSD license and contains 15K images and 22K human instances for training. There are at most 16 human keypoints annotated for each instance in this dataset.
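For readers unfamiliar with the metric quoted above: PCKh (Percentage of Correct Keypoints, head-normalized) counts a predicted joint as correct when its distance to the ground-truth joint is below a fraction (commonly 0.5) of the instance's head segment length. A minimal sketch of the computation; the function name and array layout are our own, not from the paper:

```python
import numpy as np

def pckh(pred, gt, head_sizes, alpha=0.5):
    """PCKh@alpha over N instances with K joints each.

    pred, gt:   (N, K, 2) arrays of joint coordinates
    head_sizes: (N,) array of per-instance head segment lengths
    Returns the percentage of joints within alpha * head size of ground truth.
    """
    dists = np.linalg.norm(pred - gt, axis=-1)   # (N, K) per-joint distances
    thresh = alpha * head_sizes[:, None]         # (N, 1) per-instance threshold
    return 100.0 * (dists < thresh).mean()
```

For example, predictions that coincide exactly with the ground truth score 100.0, while predictions displaced far beyond the threshold score 0.0.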


Multi-dataset Training of Transformers for Robust Action Recognition

Neural Information Processing Systems

We study the task of learning robust feature representations that generalize well across multiple datasets for action recognition. We build our method on Transformers for their efficacy. Although we have witnessed great progress in video action recognition over the past decade, it remains a challenging yet valuable problem to train a single model that performs well across multiple datasets. Here, we propose a novel multi-dataset training paradigm, MultiTrain, with two new loss terms, namely an informative loss and a projection loss, aiming to learn robust representations for action recognition. We verify the effectiveness of our method on five challenging datasets: Kinetics-400, Kinetics-700, Moments-in-Time, ActivityNet, and Something-Something-v2.
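The basic multi-dataset setup described above can be sketched as a shared backbone with one classification head per dataset, where each training step routes a batch to its dataset's head. This is only an illustrative skeleton: the backbone below is a toy MLP standing in for the paper's video Transformer, the loss is plain cross-entropy, and the paper's informative and projection losses are not reproduced here (their definitions are in the paper itself). Class counts for Kinetics-400 (400) and Something-Something-v2 (174) are the datasets' published label-set sizes.

```python
import torch
import torch.nn as nn

class MultiDatasetModel(nn.Module):
    """Shared backbone + per-dataset classification heads (illustrative)."""
    def __init__(self, in_dim, feat_dim, num_classes_per_dataset):
        super().__init__()
        # Toy stand-in for a video Transformer backbone.
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # One linear head per dataset, keyed by dataset name.
        self.heads = nn.ModuleDict({
            name: nn.Linear(feat_dim, n)
            for name, n in num_classes_per_dataset.items()
        })

    def forward(self, x, dataset):
        return self.heads[dataset](self.backbone(x))

model = MultiDatasetModel(256, 128, {"kinetics400": 400, "ssv2": 174})
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

# One training step: draw a batch from one dataset, route to its head.
x = torch.randn(8, 256)                   # stand-in for video features
y = torch.randint(0, 400, (8,))           # Kinetics-400 labels
loss = ce(model(x, "kinetics400"), y)
opt.zero_grad()
loss.backward()
opt.step()
```

In practice, batches are typically sampled round-robin or proportionally across the datasets, and the paper's additional loss terms would be added to the cross-entropy objective here.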